Building a reliable AI decision support system requires a strong body of data on which to train the model, both in terms of quantity and diversity. Acquiring such datasets can be difficult in resource-limited settings or during the early stages of deployment. Sample rejection is one way to tackle this challenge, but much of the existing work in this area is ill-suited to such scenarios. This paper substantiates that position and proposes a simple solution as a proof-of-concept baseline.
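The abstract does not specify the concrete rejection rule, so the following is only a minimal sketch of what a sample-rejection baseline for decision support can look like: the model abstains on low-confidence inputs rather than risking a wrong recommendation when training data is scarce. The threshold value and function names are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a confidence-based sample-rejection baseline (illustrative only;
# the paper's concrete method is not described in the abstract).
import numpy as np

def predict_with_rejection(probs: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """probs: (n_samples, n_classes) predicted class probabilities.
    Returns predicted labels, with -1 marking rejected (deferred) samples."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    labels[confidence < threshold] = -1  # abstain instead of risking a wrong decision
    return labels

# Example: two confident predictions, one deferred to a human reviewer.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.10, 0.90]])
print(predict_with_rejection(probs))  # -> [ 0 -1  1]
```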
Learned locomotion policies can rapidly adapt to diverse environments similar to those experienced during training but lack a mechanism for fast tuning when they fail in an out-of-distribution test environment. This necessitates a slow and iterative cycle of reward and environment redesign to achieve good performance on a new task. As an alternative, we propose learning a single policy that encodes a structured family of locomotion strategies that solve training tasks in different ways, resulting in Multiplicity of Behavior (MoB). Different strategies generalize differently and can be chosen in real-time for new tasks or environments, bypassing the need for time-consuming retraining. We release a fast, robust open-source MoB locomotion controller, Walk These Ways, that can execute diverse gaits with variable footswing, posture, and speed, unlocking diverse downstream tasks: crouching, hopping, high-speed running, stair traversal, bracing against shoves, rhythmic dance, and more. Video and code release: https://gmargo11.github.io/walk-these-ways/
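To make the Multiplicity of Behavior idea concrete, here is a hedged sketch of a single policy conditioned on behavior parameters so that different strategies can be selected at test time without retraining. The network sizes, observation dimensions, and the particular behavior parameters are assumptions for illustration, not the released Walk These Ways controller.

```python
# Hedged sketch: one policy, many behaviors, selected by a conditioning vector at runtime.
import torch
import torch.nn as nn

class BehaviorConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=48, behavior_dim=5, act_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + behavior_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs, behavior):
        # behavior = [speed, gait frequency, footswing height, body height, stance width]
        # (illustrative parameterization; the paper's command space may differ)
        return self.net(torch.cat([obs, behavior], dim=-1))

policy = BehaviorConditionedPolicy()
obs = torch.zeros(1, 48)
crouch = torch.tensor([[0.5, 2.0, 0.05, 0.15, 0.25]])   # low body height -> crouching gait
sprint = torch.tensor([[3.5, 4.0, 0.10, 0.30, 0.25]])   # high speed command -> running gait
print(policy(obs, crouch).shape, policy(obs, sprint).shape)  # joint-position targets
```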
Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional diffusion model, we illustrate how we may circumvent the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test-time that can satisfy several constraints together or demonstrate a composition of skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making.
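The core mechanism is sampling behavior from a conditional diffusion model rather than running dynamic programming. Below is a heavily simplified sketch of return-conditioned sampling with classifier-free guidance; the toy denoiser, the crude update rule, and all dimensions are assumptions made for illustration, and the paper's actual architecture (which diffuses full trajectories) is richer than this.

```python
# Hedged sketch: sample an action sequence from a return-conditional diffusion model
# using classifier-free guidance (toy denoiser, simplified denoising update).
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    def __init__(self, horizon=8, act_dim=4):
        super().__init__()
        self.horizon, self.act_dim = horizon, act_dim
        self.net = nn.Sequential(nn.Linear(horizon * act_dim + 2, 128), nn.ReLU(),
                                 nn.Linear(128, horizon * act_dim))

    def forward(self, x, t, target_return):
        cond = torch.zeros(x.shape[0], 2)
        cond[:, 0] = t                                # diffusion timestep
        if target_return is not None:                 # None -> unconditional branch
            cond[:, 1] = target_return
        return self.net(torch.cat([x.flatten(1), cond], dim=-1)).view_as(x)

def sample(denoiser, target_return=0.9, steps=50, guidance=1.2):
    x = torch.randn(1, denoiser.horizon, denoiser.act_dim)        # start from noise
    for t in reversed(range(steps)):
        eps_c = denoiser(x, torch.tensor(float(t)), torch.tensor(target_return))
        eps_u = denoiser(x, torch.tensor(float(t)), None)
        eps = eps_u + guidance * (eps_c - eps_u)                  # classifier-free guidance
        x = x - eps / steps                                       # simplified denoising step
    return x                                                      # execute the first action

print(sample(ToyDenoiser()).shape)  # torch.Size([1, 8, 4])
```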
Machine learning and deep learning-based decision making have become part of today's software. The goal of this work is to ensure that machine learning and deep learning-based systems are as trustworthy as traditional software. Traditional software is made dependable by following rigorous practices such as static analysis, testing, debugging, verification, and repair throughout the development and maintenance life-cycle. Similarly, for machine learning systems, we need to keep these models up to date so that their performance is not compromised. To this end, current systems rely on scheduled re-training of these models as new data arrives. In this work, we propose to measure the data drift that takes place when new data arrives, so that one can adaptively re-train the models whenever re-training is actually required, irrespective of schedules. In addition, we generate explanations at the sentence level and the dataset level to capture why a given payload text has drifted.
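As a rough illustration of adaptive re-training, the sketch below compares incoming payload embeddings against a reference window and triggers re-training only when a drift score exceeds a threshold. The drift statistic, threshold, and embedding setup are assumptions for illustration; the paper's actual drift measure and explanation machinery are not specified here.

```python
# Illustrative sketch (not the paper's exact metric): trigger re-training adaptively
# when the drift between reference and incoming embeddings crosses a threshold.
import numpy as np

def drift_score(reference: np.ndarray, incoming: np.ndarray) -> float:
    """Distance between mean embeddings, scaled by reference spread (a crude drift proxy)."""
    gap = np.linalg.norm(reference.mean(axis=0) - incoming.mean(axis=0))
    spread = np.linalg.norm(reference.std(axis=0)) + 1e-8
    return gap / spread

def maybe_retrain(reference, incoming, threshold=0.5) -> bool:
    score = drift_score(reference, incoming)
    if score > threshold:
        print(f"drift={score:.2f} > {threshold}: re-training required")
        return True
    print(f"drift={score:.2f}: model still valid, skip re-training")
    return False

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(1000, 64))   # embeddings of training-time payloads
incoming = rng.normal(0.8, 1.0, size=(200, 64))     # new payloads with a shifted distribution
maybe_retrain(reference, incoming)
```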
We address the general task of structured commonsense reasoning: given a natural language input, the goal is to generate a graph such as an event graph or a reasoning graph. To employ large language models (LMs) for this task, existing approaches "serialize" the output graph as a flat list of nodes and edges. Although feasible, these serialized graphs strongly deviate from the natural language corpora that LMs were pre-trained on, hindering LMs from generating them correctly. In this paper, we show that when we instead frame structured commonsense reasoning tasks as code generation tasks, pre-trained LMs of code are better structured commonsense reasoners than LMs of natural language, even when the downstream task does not involve source code at all. We demonstrate our approach across three diverse structured commonsense reasoning tasks. On all three of these natural language tasks, we show that with our approach, a code generation LM (CODEX) outperforms natural-language LMs that are fine-tuned on the target task (e.g., T5) and other strong LMs such as GPT-3 in the few-shot setting.
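The key framing is that a graph expressed as code sits much closer to a code LM's pre-training distribution than a flat node/edge list. The sketch below contrasts the two representations; the class and method names are illustrative assumptions, since the abstract does not fix the exact prompt format.

```python
# Hedged sketch: the same reasoning graph as a flat serialization vs. as code.

# Flat serialization that a natural-language LM would be asked to generate:
flat = "nodes: find recipe; buy ingredients; bake cake | edges: 0->1; 1->2"

# Equivalent structure framed as code, which a code LM can complete more naturally:
class BakeACake:
    goal = "bake a cake"

    def step_0(self):
        return "find a recipe"

    def step_1(self):
        # depends on step_0
        return "buy the ingredients"

    def step_2(self):
        # depends on step_1
        return "bake the cake"
```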
Personalized federated learning allows clients in a distributed system to train neural networks tailored to their own local data while leveraging information from other clients. However, clients' models are vulnerable to attacks during both the training and testing phases. In this paper, we address the problem of adversarial clients crafting evasion attacks at test time to deceive other clients. For example, an adversary may aim to fool spam filters and recommendation systems trained with personalized federated learning for monetary gain. Depending on the distributed learning method, adversarial clients have varying degrees of personalization, leading to a "grey-box" situation. We are the first to characterize the transferability of such internal evasion attacks across different learning methods, and to analyze the trade-off between model accuracy and robustness as a function of the degree of personalization and the similarity of client data. We introduce a defense mechanism, pFedDef, which performs personalized federated adversarial training while respecting the resource constraints of clients that would otherwise inhibit adversarial training. Overall, pFedDef increases relative grey-box adversarial robustness by 62% compared to federated adversarial training and performs well even under limited system resources.
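To illustrate the resource-aware aspect, here is a hedged sketch of one client's local update in which only a budget-dependent fraction of each batch is adversarially perturbed. FGSM is used purely as a stand-in attack, and the function and parameter names are illustrative; pFedDef's actual procedure is more involved than this.

```python
# Hedged sketch: resource-aware adversarial training step on a single federated client.
import torch
import torch.nn.functional as F

def local_adversarial_step(model, optimizer, x, y, adv_fraction=0.5, epsilon=0.03):
    """adv_fraction reflects the client's compute budget for adversarial training."""
    n_adv = int(adv_fraction * x.shape[0])
    x_adv = x.clone()
    if n_adv > 0:
        x_sub = x[:n_adv].clone().requires_grad_(True)
        loss = F.cross_entropy(model(x_sub), y[:n_adv])
        grad, = torch.autograd.grad(loss, x_sub)
        x_adv[:n_adv] = (x_sub + epsilon * grad.sign()).detach()  # FGSM perturbation (stand-in)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()

model = torch.nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
local_adversarial_step(model, opt, torch.randn(32, 20), torch.randint(0, 2, (32,)),
                       adv_fraction=0.25)   # a resource-constrained client perturbs fewer samples
```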
Reasoning is a key pillar of human cognition and intelligence. In the past decade, we have witnessed enormous gains in natural language processing and unprecedented scaling of large language models. Recent work has characterized the ability of few-shot techniques such as chain-of-thought prompting to emulate human reasoning in large language models. This hallmark few-shot capability, combined with ever-scaling language models, has opened up a vista of possibilities for solving tasks such as math word problems, code completion, and commonsense reasoning. Chain-of-thought (CoT) prompting pushes model performance further by providing intermediate steps and urging the model to follow the same process. Despite its compelling performance, the origin of the reasoning capability in these models remains under-explored. This work takes preliminary steps towards a deeper understanding of the reasoning mechanisms in large language models. Our work centers on querying the model while controlling all but one component of the prompt: symbols, patterns, and text. We then analyze the performance differences across queries. Our results show that the presence of factual patterns in the prompt is not necessary for the success of CoT. Nonetheless, we show empirically that relying solely on patterns is also insufficient for high-quality results. We posit that text imbues patterns with commonsense knowledge and meaning. Our exhaustive empirical analysis provides qualitative examples of the symbiotic relationship between text and patterns. This systematic understanding of CoT enables us to devise concise chains of thought, dubbed CCoT, in which text and patterns are pruned to retain only their key roles, while delivering task solve rates on par with or better than the original.
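To make the ablation concrete, below is a hedged sketch of varying one prompt exemplar along the three controlled components: symbols (the numbers/entities), patterns (the fixed step structure), and text (the natural-language glue). The example question and all values are made up for illustration and are not taken from the paper.

```python
# Illustrative sketch of the symbols / patterns / text ablation on a single CoT exemplar.
def make_exemplar(symbols=True, text=True):
    a, b, res = ("5", "3", "8") if symbols else ("x", "y", "z")   # counterfactual symbols
    glue = "Roger now has " if text else ""                        # natural-language connective
    question = f"Q: Roger has {a} apples and buys {b} more. How many apples does he have?"
    chain = f"{glue}{a} + {b} = {res}"                              # the step *pattern* stays fixed
    return f"{question}\nA: {chain}. The answer is {res}."

print(make_exemplar())                    # full chain-of-thought exemplar
print(make_exemplar(symbols=False))       # symbols replaced, pattern and text intact
print(make_exemplar(text=False))          # text pruned, pattern and symbols intact (CCoT-style)
```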
We present the results of the NLP Community Metasurvey. Run from May 2022 to June 2022, the survey elicited opinions on controversial issues, including industry influence on the field and concerns about AGI and ethics. Our results put concrete numbers to several controversies: for example, respondents are split almost exactly in half on questions about the importance of artificial general intelligence, whether language models understand language, and the necessity of linguistic structure and inductive bias for solving NLP problems. In addition, the survey posed meta-questions, asking respondents to predict the distribution of survey responses. This allows us not only to gain insight into the range of beliefs held by NLP researchers, but also to uncover false sociological beliefs where the community's predictions do not match reality. We find such mismatches across a wide range of questions. Among other results, the community greatly overestimates its own belief in the usefulness of benchmarks and in the potential of scaling to solve real-world problems, while underestimating its own belief in the importance of linguistic structure, inductive bias, and interdisciplinary science.
Standard inference and training with transformer-based architectures scale quadratically with input sequence length. This is prohibitively expensive for a variety of applications, especially web-page translation, query-answering, and the like, so several approaches have recently been developed to speed up attention computation by enforcing different attention structures such as sparsity, low rank, or the use of kernels. In this work, we view attention computation as a nearest-neighbor retrieval problem and use decision-tree-based hierarchical navigation to reduce the retrieval cost per query token from linear in sequence length to nearly logarithmic. Based on such hierarchical navigation, we design Treeformer, which can use one of two efficient attention layers: TF-Attention and TC-Attention. TF-Attention computes attention in a fine-grained style, while TC-Attention is a coarse attention layer that also ensures that gradients are "dense". To optimize such challenging discrete layers, we propose a two-level bootstrapped training method. Using extensive experiments on standard NLP benchmarks, especially for long sequences, we demonstrate that our Treeformer architecture can be almost as accurate as baseline Transformers while using 30x fewer FLOPs in the attention layer. Compared to Linformer, accuracy can be as much as 12% higher while using similar FLOPs in the attention layer.
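The retrieval idea can be illustrated with a small sketch: keys are partitioned by a shallow decision tree, each query is routed to a leaf, and softmax attention is computed only over the keys in that leaf. The random split directions, fallback rule, and shapes below are assumptions for illustration; TF-Attention, TC-Attention, and the bootstrapped training in the paper are considerably more elaborate.

```python
# Hedged sketch of tree-routed attention: each query attends only to keys in its leaf.
import numpy as np

def route(x, split_dirs):
    """Return a leaf id by taking the sign of x along each split direction."""
    leaf = 0
    for w in split_dirs:
        leaf = 2 * leaf + int(x @ w > 0)
    return leaf

def tree_attention(queries, keys, values, depth=3, seed=0):
    d = queries.shape[1]
    split_dirs = np.random.default_rng(seed).normal(size=(depth, d))  # stand-in for learned splits
    key_leaf = np.array([route(k, split_dirs) for k in keys])
    out = np.zeros_like(queries)
    for i, q in enumerate(queries):
        idx = np.where(key_leaf == route(q, split_dirs))[0]
        if len(idx) == 0:                              # empty leaf: fall back to all keys
            idx = np.arange(len(keys))
        scores = keys[idx] @ q / np.sqrt(d)
        weights = np.exp(scores - scores.max()); weights /= weights.sum()
        out[i] = weights @ values[idx]
    return out

q = np.random.randn(16, 32); k = np.random.randn(1024, 32); v = np.random.randn(1024, 32)
print(tree_attention(q, k, v).shape)   # each query attends to ~1024 / 2**3 keys on average
```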
We present a system for accurately predicting stable orientations for diverse rigid objects. We propose to overcome the critical issue of modeling multimodality in rotation space by accurately classifying contact surfaces using a conditional generative model. Our system operates on noisy and partially observed point-cloud observations captured by real-world depth cameras. Our method substantially outperforms the current state-of-the-art systems on a simulated stacking task requiring highly accurate rotations, and demonstrates strong sim2real zero-shot transfer results on a real-world reorientation task. Project website: https://richardrl.github.io/stable-reorientation/
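As a rough illustration of how a predicted contact surface translates into a stable orientation, the sketch below rotates a predicted support-surface normal onto the gravity direction; the conditional generative model that would predict this normal from a partial point cloud is stubbed out, and all values are hypothetical.

```python
# Hedged sketch: align a predicted contact-surface normal with gravity (Rodrigues' formula).
import numpy as np

def rotation_aligning(normal, target=np.array([0.0, 0.0, -1.0])):
    """Return a rotation matrix mapping `normal` onto `target`."""
    n = normal / np.linalg.norm(normal)
    v = np.cross(n, target); c = float(n @ target); s = np.linalg.norm(v)
    if s < 1e-8:
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])  # aligned or antiparallel
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s**2)

predicted_contact_normal = np.array([0.1, 0.0, -0.99])   # would come from the generative model
R = rotation_aligning(predicted_contact_normal)
print(R @ (predicted_contact_normal / np.linalg.norm(predicted_contact_normal)))  # ~ [0, 0, -1]
```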